The aim of this study is to develop a robust deep-learning-based framework to distinguish COVID-19, community-acquired pneumonia (CAP), and normal cases using chest CT scans acquired in different imaging centers with various protocols and radiation doses. We show that although our proposed model was trained on a relatively small dataset acquired from only one imaging center using a specific scanning protocol, it performs well on heterogeneous test sets obtained with multiple scanners using different technical parameters. We also show that the model can be updated via an unsupervised approach to cope with the data shift between the train and test sets, and to enhance its robustness when a new external dataset is received from a different center. We adopted an ensemble architecture to aggregate the predictions of multiple versions of the model. For initial training and development purposes, an in-house dataset of 171 COVID-19, 60 CAP, and 76 normal cases was used, containing volumetric CT scans acquired from one imaging center with a constant standard-dose scanning protocol. To evaluate the model, we collected four different test sets retrospectively to investigate the effects of shifts in data characteristics on model performance. Among the test cases, there are CT scans with characteristics similar to the train set, as well as noisy low-dose and ultra-low-dose CT scans. In addition, some test CT scans were obtained from patients with a history of cardiovascular disease or surgery. The entire test dataset used in this study contains 51 COVID-19, 28 CAP, and 51 normal cases. Experimental results indicate that our proposed framework performs well on all test sets, achieving a total accuracy of 96.15% (95% CI: [91.25-98.74]), a COVID-19 sensitivity of 96.08% (95% CI: [86.54-99.5]), and a CAP sensitivity of 92.86% (95% CI: [76.50-99.19]).
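The ensemble step described above is easy to picture in code. Below is a minimal sketch that simply averages the softmax outputs of several model versions; the averaging rule, the class names, and the `models` list are illustrative assumptions rather than details taken from the abstract:

```python
import numpy as np

def ensemble_predict(models, ct_volume):
    """Average class probabilities from multiple versions of a model.

    `models` is a list of callables, each mapping a CT volume to a
    probability vector over (COVID-19, CAP, Normal). The aggregation
    rule here (a plain mean) is an assumption for illustration.
    """
    probs = np.stack([m(ct_volume) for m in models])  # (n_models, 3)
    mean_probs = probs.mean(axis=0)
    classes = ["COVID-19", "CAP", "Normal"]
    return classes[int(mean_probs.argmax())], mean_probs
```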
Reverse transcription polymerase chain reaction (RT-PCR) is currently the gold standard in COVID-19 diagnosis. However, it can take days to provide a diagnosis, and its false-negative rate is relatively high. Imaging, in particular chest computed tomography (CT), can assist with the diagnosis and assessment of this disease. Nevertheless, it has been shown that standard-dose CT scans impose a significant radiation burden on patients, especially those requiring multiple scans. In this study, we consider low-dose and ultra-low-dose (LDCT and ULDCT) scan protocols that reduce the radiation exposure close to that of a single X-ray, while maintaining an acceptable resolution for diagnostic purposes. Since chest radiology expertise may not be widely available during a pandemic, we develop an artificial intelligence (AI)-based framework using the collected dataset of LDCT/ULDCT scans to investigate the hypothesis that the AI model can provide human-level performance. The AI model uses a two-stage capsule network architecture to rapidly classify COVID-19, community-acquired pneumonia (CAP), and normal cases using LDCT/ULDCT scans. The AI model achieves a COVID-19 sensitivity of 89.5% ± 0.11, a CAP sensitivity of 95% ± 0.11, a normal-case sensitivity (specificity) of 85.7% ± 0.16, and an accuracy of 90% ± 0.06. By incorporating clinical data (demographics and symptoms), the performance is further improved to a COVID-19 sensitivity of 94.3% ± 0.05, a CAP sensitivity of 96.7% ± 0.07, a normal-case sensitivity (specificity) of 91% ± 0.09, and an accuracy of 94.1% ± 0.03. The proposed AI model achieves human-level diagnosis based on LDCT/ULDCT scans with reduced radiation exposure. We believe the proposed AI model has the potential to assist radiologists in diagnosing COVID-19 infection accurately and promptly, and to help control the transmission chain during the pandemic.
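The abstract reports that adding clinical data (demographics and symptoms) improves performance. One common way to realize this is late fusion, concatenating a CT-derived feature vector with clinical features before a small classification head; a minimal sketch under that assumption (the feature dimensions and single hidden layer are placeholders, and the paper's capsule-network stages are not reproduced here):

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Fuse CT-derived features with clinical data (demographics, symptoms).

    The feature sizes and the hidden layer are illustrative assumptions,
    not the architecture described in the paper.
    """
    def __init__(self, ct_feat_dim=128, clinical_dim=10, n_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(ct_feat_dim + clinical_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, ct_features, clinical_features):
        fused = torch.cat([ct_features, clinical_features], dim=1)
        return self.head(fused)  # logits over (COVID-19, CAP, Normal)
```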
Increasingly taking place in online spaces, modern political conversations are typically perceived to be unproductively affirming -- siloed in so-called "echo chambers" of exclusively like-minded discussants. Yet, to date we lack sufficient means to measure viewpoint diversity in conversations. To this end, in this paper, we operationalize two viewpoint metrics proposed for recommender systems and adapt them to the context of social media conversations. This is the first study to apply these two metrics (Representation and Fragmentation) to real-world data and to consider the implications for online conversations specifically. We apply these measures to two topics -- daylight saving time (DST), which serves as a control, and the more politically polarized topic of immigration. We find that the diversity scores for both Fragmentation and Representation are lower for immigration than for DST. Further, we find that while pro-immigrant views receive consistent pushback on the platform, anti-immigrant views largely operate within echo chambers. We observe less severe yet similar patterns for DST. Taken together, Representation and Fragmentation paint a meaningful and important new picture of viewpoint diversity.
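The abstract does not give the formulas for the two metrics, so the sketch below is only an assumed shape of such measures: Representation compares a conversation's viewpoint distribution with a platform-wide reference distribution, and Fragmentation averages the pairwise divergence between individual users' exposure distributions, both via KL divergence (our assumption, not the paper's definitions):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discrete distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def representation(conv_viewpoints, platform_viewpoints):
    """Lower divergence from the platform-wide viewpoint distribution
    = a more representative conversation (illustrative definition)."""
    return kl(conv_viewpoints, platform_viewpoints)

def fragmentation(user_distributions):
    """Average pairwise divergence between users' viewpoint exposure;
    higher values suggest echo-chamber-like fragmentation (assumed form)."""
    n = len(user_distributions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([kl(user_distributions[i], user_distributions[j])
                          for i, j in pairs]))
```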
Climate change is threatening human health at an unprecedented scale and in many ways. These threats are expected to grow unless effective and evidence-based policies are developed and acted upon to minimize or eliminate them. Attaining such a task requires the highest degree of the flow of knowledge from science into policy. The multidisciplinary and location-specific nature of published science, together with its sheer volume, makes it challenging to keep track of novel work in this area and renders traditional knowledge-synthesis methods inefficient in infusing science into policy. To this end, we consider developing multiple domain-specific language models (LMs) with different variations from climate- and health-related information, which can serve as a foundational step toward capturing available knowledge to enable solving different tasks, such as detecting similarities between climate- and health-related concepts, fact-checking, relation extraction, generating policy text from evidence of health effects, and more. To our knowledge, this is the first work that proposes developing multiple domain-specific language models for the considered domains. We will make the developed models, resources, and codebase available to researchers.
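A common recipe for building such domain-specific LMs is continued (domain-adaptive) pretraining of a general model on in-domain text. The following is a minimal sketch with HuggingFace `transformers`; the base model, the tiny example corpus, and the hyperparameters are illustrative assumptions, as the abstract does not specify the training recipe:

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

# Hypothetical in-domain corpus of climate- and health-related sentences.
corpus = ["Heat waves increase the risk of cardiovascular mortality.",
          "Rising temperatures expand the range of vector-borne diseases."]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

dataset = Dataset.from_dict({"text": corpus}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="climate-health-lm",
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()
```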
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
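Since the models are publicly released, they can be loaded with the standard `transformers` API. The full 176B checkpoint requires multi-GPU hardware, so this usage sketch uses the smaller released `bigscience/bloom-560m` variant for illustration:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The full "bigscience/bloom" (176B) checkpoint needs substantial hardware;
# the 560M-parameter released variant is used here for illustration.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```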
The global population is aging, creating a need for the right tools to enable older adults' greater independence and ability to age at home, as well as to assist healthcare workers. It is feasible to achieve this objective by building predictive models that assist healthcare workers in monitoring and analyzing older adults' behavioral, functional, and psychological data. To develop such models, a large amount of multimodal sensor data is typically required. In this paper, we propose MAISON, a scalable cloud-based platform of commercially available smart devices capable of collecting desired multimodal sensor data from older adults and patients living in their own homes. The MAISON platform is novel due to its ability to collect a greater variety of data modalities than existing platforms, as well as its new features that result in seamless data collection and ease of use for older adults who may not be digitally literate. We demonstrated the feasibility of the MAISON platform with two older adults discharged home from a large rehabilitation center. The results indicate that the MAISON platform was able to collect and store sensor data in the cloud without functional glitches or performance degradation. This paper will also discuss the challenges faced during the development of the platform and data collection in the homes of older adults. MAISON is a novel platform designed to collect multimodal data and facilitate the development of predictive models for detecting key health indicators, including social isolation, depression, and functional decline, and is feasible to use with older adults in the community.
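To make the multimodal-collection idea concrete, here is a minimal sketch of a sensor record being serialized for cloud upload. The schema and field names are a hypothetical illustration, not the actual MAISON data model, and the transport layer is deliberately left abstract:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """One multimodal sample; a hypothetical schema for illustration."""
    participant_id: str
    modality: str          # e.g. "heart_rate", "step_count", "sleep"
    value: float
    timestamp: float

def to_cloud_payload(readings):
    """Serialize a batch of readings for upload to cloud storage."""
    return json.dumps([asdict(r) for r in readings])

batch = [SensorReading("p01", "heart_rate", 72.0, time.time())]
print(to_cloud_payload(batch))
```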
With the rapid growth of immersive video sequences, achieving seamless and high-quality compression of 3D content has become ever more critical. MPEG recently developed Video-based Point Cloud Compression (V-PCC) for dynamic point cloud coding. However, point clouds reconstructed with V-PCC suffer from various artifacts, including data lost during the preprocessing that precedes existing video coding techniques such as High Efficiency Video Coding (HEVC). Self-occluded points in patch generation and in the 3D-to-2D projection are the main causes of data loss in V-PCC. This paper proposes a new method that uses overlapped slicing as an alternative to patch generation, reducing both the number of generated patches and the amount of lost data. In the proposed method, the entire point cloud is divided into cross-sections according to the number of self-occluded points, so that data loss can be minimized during the patch-generation process and the projection. To this end, a variable number of partially overlapping layers is considered to preserve self-occluded points. Additional advantages of the proposed method are a reduced positioning requirement and slice-based coding of the geometry data. Experimental results show that, compared with the standard V-PCC method, the proposed method is more flexible, improves rate-distortion performance, and significantly reduces data loss.
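A minimal sketch of the overlapped-slicing idea, partitioning a point cloud into partially overlapping slabs along one axis so that boundary points appear in more than one slice. The axis choice, slice count, and overlap ratio are illustrative assumptions, not the paper's exact procedure (which adapts the slicing to the number of self-occluded points):

```python
import numpy as np

def overlapped_slices(points, n_slices=8, overlap=0.25, axis=2):
    """Split an (N, 3) point cloud into partially overlapping slabs
    along `axis`; the overlap keeps boundary (potentially self-occluded)
    points in more than one slice."""
    lo, hi = points[:, axis].min(), points[:, axis].max()
    thickness = (hi - lo) / n_slices
    pad = thickness * overlap
    slices = []
    for i in range(n_slices):
        start = lo + i * thickness - pad
        end = lo + (i + 1) * thickness + pad
        mask = (points[:, axis] >= start) & (points[:, axis] <= end)
        slices.append(points[mask])
    return slices

cloud = np.random.rand(1000, 3)
print([len(s) for s in overlapped_slices(cloud)])
```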
Although deep neural networks have achieved reasonable accuracy in solving face alignment, it is still a challenging task, specifically when dealing with facial images with occlusion or extreme head poses. Heatmap-based regression (HBR) and coordinate-based regression (CBR) are the two main approaches used for face alignment. CBR methods require less computer memory, although their performance is lower than that of HBR methods. In this paper, we propose an Adaptive Coordinate-based Regression (ACR) loss to improve the accuracy of CBR for face alignment. Inspired by the Active Shape Model (ASM), we generate Smooth-Face objects, sets of facial landmark points with less variation compared to the ground-truth landmark points. We then introduce a method to estimate the difficulty level of predicting each landmark point for the network by comparing the distribution of the ground-truth landmark points and the corresponding Smooth-Face object. Our proposed ACR loss can adaptively modify its curvature and influence based on the difficulty level of predicting each landmark point in a face. Accordingly, the ACR loss guides the network toward the challenging points rather than the easier points, which improves the accuracy of the face alignment task. Our extensive evaluation demonstrates the capability of the proposed ACR loss in predicting facial landmark points in various facial images.
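To illustrate the general idea of difficulty-adaptive landmark regression, here is a minimal sketch in which a per-landmark difficulty weight amplifies the error on hard points. This is an assumed weighting rule for illustration, not the paper's exact ACR formulation (which also adapts the loss curvature):

```python
import torch

def adaptive_landmark_loss(pred, target, difficulty):
    """Illustrative difficulty-adaptive coordinate-regression loss.

    pred, target: (batch, n_landmarks, 2); difficulty: (n_landmarks,)
    with values in [0, 1]. Harder landmarks get larger weights, steering
    the network toward challenging points.
    """
    per_point = ((pred - target) ** 2).sum(dim=-1)  # squared L2 per point
    weights = 1.0 + difficulty                      # assumed weighting rule
    return (weights * per_point).mean()

pred = torch.rand(4, 68, 2, requires_grad=True)
target = torch.rand(4, 68, 2)
difficulty = torch.rand(68)
loss = adaptive_landmark_loss(pred, target, difficulty)
loss.backward()
```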
We propose Distribution Embedding Networks (DEN) for classification with small data. In the same spirit of meta-learning, DEN learns from a diverse set of training tasks with the goal to generalize to unseen target tasks. Unlike existing approaches which require the inputs of training and target tasks to have the same dimension with possibly similar distributions, DEN allows training and target tasks to live in heterogeneous input spaces. This is especially useful for tabular-data tasks where labeled data from related tasks are scarce. DEN uses a three-block architecture: a covariate transformation block followed by a distribution embedding block and then a classification block. We provide theoretical insights to show that this architecture allows the embedding and classification blocks to be fixed after pre-training on a diverse set of tasks; only the covariate transformation block with relatively few parameters needs to be fine-tuned for each new task. To facilitate training, we also propose an approach to synthesize binary classification tasks, and demonstrate that DEN outperforms existing methods in a number of synthetic and real tasks in numerical studies.
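The three-block split translates naturally into code: a small task-specific covariate block that is fine-tuned, followed by shared embedding and classification blocks that stay frozen after pre-training. A minimal sketch under that reading (layer sizes and dimensions are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class DENSketch(nn.Module):
    """Sketch of the three-block idea: covariate transformation ->
    distribution embedding -> classification (sizes are placeholders)."""
    def __init__(self, in_dim, shared_dim=32, embed_dim=64, n_classes=2):
        super().__init__()
        # Task-specific: maps a heterogeneous input space to a shared one.
        self.covariate = nn.Linear(in_dim, shared_dim)
        # Shared blocks, fixed after pre-training on diverse tasks.
        self.embedding = nn.Sequential(nn.Linear(shared_dim, embed_dim),
                                       nn.ReLU())
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        return self.classifier(self.embedding(self.covariate(x)))

model = DENSketch(in_dim=10)
# Fine-tune only the covariate transformation block on a new task.
for block in (model.embedding, model.classifier):
    for p in block.parameters():
        p.requires_grad = False
optimizer = torch.optim.Adam(model.covariate.parameters(), lr=1e-3)
```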
Solar shading design should be considered for the desired indoor environmental quality (IEQ) in the early design stages. This field can be very challenging and time-consuming, requiring experts, sophisticated software, and a significant amount of money. The primary purpose of this research is to design a simple tool to study various models of solar shading and to make decisions easier and faster in the early stages. Database generation methods, artificial intelligence, and optimization have been used to achieve this goal. The tool includes two main parts: 1. predicting the performance of a user-selected model and identifying the effective parameters, and 2. proposing pre-prepared optimal models to the user. In this regard, a side-lit shoebox model with variable parameters was first modeled parametrically, and five common solar shading models were applied to the space. For each solar shading model and the no-shading state, metrics related to daylight and glare, view, and initial cost were simulated. The database generated in this research includes 87,912 alternatives and six calculated metrics, which were introduced to optimized machine learning models, including artificial neural network, random forest, support vector regression, and k-nearest neighbors. According to the results, the most accurate and fastest estimation model was random forest, with an R2 score of 0.967 to 1. A sensitivity analysis was then performed to determine the most influential parameters for each shading model and the no-shading state. This analysis identified the most effective parameters, including window orientation, window-to-wall ratio (WWR), room width, room length, and shading depth. Finally, by optimizing the estimation function of the machine learning models with the NSGA-II algorithm, about 7,300 optimal models were identified. The developed tool can evaluate a wide range of design alternatives for each design.
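A minimal sketch of the surrogate-modeling step: a random forest fitted to (design parameters → simulated metric) pairs, whose predictions can replace slow simulation inside the optimization loop and whose feature importances support the sensitivity analysis. The data, feature order, and hyperparameters below are placeholders; the actual database holds 87,912 alternatives and six metrics:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-ins for the design database: each row holds design
# parameters (e.g. window orientation, WWR, room width, room length,
# shading depth), each target one simulated metric (daylight, glare,
# view, or initial cost).
rng = np.random.default_rng(0)
X = rng.random((1000, 5))  # scaled design parameters
y = rng.random(1000)       # one simulated performance metric

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X, y)

# Fast surrogate predictions stand in for simulation during NSGA-II;
# feature importances give a first-pass sensitivity ranking.
print(surrogate.feature_importances_)
print(surrogate.predict(X[:3]))
```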